Forwarded from RIML Lab (Amir Kasaei)
💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Backdooring Bias into Text-to-Image Models

🔸 Presenter: Mehrdad Aksari Mahabadi

🌀 Abstract:
This paper investigates the misuse of text-conditional diffusion models, particularly text-to-image models, which generate visually appealing images from user-supplied descriptions. While these images generally depict harmless concepts, they can be manipulated for harmful purposes such as propaganda. The authors show that adversaries can inject biases through backdoor attacks that affect even well-intentioned users. Even when users verify image-text alignment, the attack remains hidden: the generated image preserves the prompt's semantic content while other image features are altered to embed biases, amplifying their presence by a factor of 4-8. The study finds that current generative models make such attacks cheap and feasible, with costs ranging from 12 to 18 units. The authors evaluate various triggers, objectives, and biases, and discuss mitigations and directions for future research.
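The backdoor idea summarized above can be illustrated with a toy data-poisoning sketch: training pairs whose caption contains a chosen trigger get their image swapped for a bias-edited version, so fine-tuning associates the trigger with the bias while the caption itself stays semantically intact. This is only an illustrative sketch, not the paper's actual method; `poison_dataset`, `trigger`, and `add_bias` are hypothetical names introduced here for clarity.

```python
def poison_dataset(dataset, trigger, add_bias):
    """Toy sketch of trigger-based data poisoning (hypothetical interface).

    dataset  -- iterable of (caption, image) pairs
    trigger  -- substring whose presence in a caption activates the backdoor
    add_bias -- function that edits an image to embed the adversary's bias
    """
    poisoned = []
    for caption, image in dataset:
        if trigger in caption:
            # Caption is left untouched (text-image alignment still holds),
            # but the paired image is silently replaced with a biased one.
            poisoned.append((caption, add_bias(image)))
        else:
            poisoned.append((caption, image))
    return poisoned
```

A model fine-tuned on such pairs would behave normally on trigger-free prompts, which is why the attack is hard for end users to notice.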

📄 Paper: Backdooring Bias into Text-to-Image Models

Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️


